Kechi Zhang

Do Transformers Have the Ability for Periodicity Generalization?

Jan 30, 2026

KOCO-BENCH: Can Large Language Models Leverage Domain Knowledge in Software Development?

Jan 19, 2026

Weights to Code: Extracting Interpretable Algorithms from the Discrete Transformer

Jan 09, 2026

RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization

Jul 31, 2025

A Survey on Code Generation with LLM-based Agents

Jul 31, 2025

SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning

May 22, 2025

FANformer: Improving Large Language Models Through Effective Periodicity Modeling

Feb 28, 2025

FAN: Fourier Analysis Networks

Oct 03, 2024

Self-Edit: Fault-Aware Code Editor for Code Generation

May 06, 2023

Implant Global and Local Hierarchy Information to Sequence based Code Representation Models

Mar 14, 2023